# Connect to AICoding.io.vn API with OpenAI and Anthropic Format Support

The AICoding Connection Algorithm provides flexible access to the models hosted on AICoding.io.vn (Claude, GPT, Gemini, and GLM) through two interfaces: OpenAI-compatible chat completions and Anthropic's native messages endpoint.
- Base URL: https://aicoding.io.vn
- API Key: environment variable `AICODING_API_KEY`, or passed as a parameter
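Key resolution can be sketched as follows; the `resolve_api_key` helper is illustrative, not part of the algorithm's API:

```python
import os
from typing import Optional

def resolve_api_key(api_key: Optional[str] = None) -> str:
    # Prefer an explicitly passed key; fall back to the environment.
    key = api_key or os.environ.get("AICODING_API_KEY")
    if not key:
        raise ValueError("Pass api_key or set the AICODING_API_KEY environment variable")
    return key
```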
Headers:

```json
{
  "Authorization": "Bearer {api_key}",
  "Content-Type": "application/json",
  "anthropic-version": "2023-06-01"
}
```

The `anthropic-version` header applies to Anthropic endpoints only.

| Model | ID | Cost Multiplier |
|---|---|---|
| Claude Sonnet 4.5 | claude-sonnet-4-5-20250929 | 1x |
| Claude Opus 4.5 | claude-opus-4-5-20251001 | 1.5x |
| Claude Haiku 4.5 | claude-haiku-4-5-20251001 | 1x |
| Gemini 3 Pro | gemini-3-pro-preview | 1x |
| GPT-5.1 | gpt-5.1 | 1x |
| GPT-5.1-Codex | gpt-5.1-codex | 1x |
| GPT-5.2 | gpt-5.2 | 1x |
| GPT-5.2-Codex | gpt-5.2-codex | 1x |
| GLM-4.6 | glm-4.6 | 1x |
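Given the multipliers above, effective token cost can be estimated with a small helper. This is a hypothetical sketch to illustrate how the multiplier applies, not part of the algorithm's API:

```python
# Cost multipliers from the model table above.
COST_MULTIPLIERS = {
    "claude-sonnet-4-5-20250929": 1.0,
    "claude-opus-4-5-20251001": 1.5,
    "claude-haiku-4-5-20251001": 1.0,
    "gemini-3-pro-preview": 1.0,
    "gpt-5.1": 1.0,
    "gpt-5.1-codex": 1.0,
    "gpt-5.2": 1.0,
    "gpt-5.2-codex": 1.0,
    "glm-4.6": 1.0,
}

def billed_tokens(model: str, total_tokens: int) -> float:
    """Apply the model's cost multiplier to a raw token count."""
    return total_tokens * COST_MULTIPLIERS.get(model, 1.0)
```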
Test connection and discover models.

```python
result = aicoding.execute({"action": "connect"})
```

Get all available models.

```python
result = aicoding.execute({"action": "list_models"})
```

Send OpenAI-compatible chat requests.
```python
result = aicoding.execute({
    "action": "chat",
    "model": "gpt-5.1-codex",
    "messages": [
        {"role": "system", "content": "You are a coding assistant."},
        {"role": "user", "content": "Write a quicksort in Python"}
    ],
    "temperature": 0.3,
    "max_tokens": 1500
})
```

Send Anthropic-native message requests.
```python
result = aicoding.execute({
    "action": "messages",
    "model": "claude-sonnet-4-5-20250929",
    "messages": [
        {"role": "user", "content": "Explain quantum computing"}
    ],
    "temperature": 0.7,
    "max_tokens": 2000
})
```

Check API health.

```python
result = aicoding.execute({"action": "health"})
# Average: 317ms response time
```

Load the algorithm through the manager:

```python
from core.algorithms.algorithm_manager import AlgorithmManager

manager = AlgorithmManager(auto_scan=True)
aicoding = manager.get_algorithm("AICodingConnection")

# Test connection
result = aicoding.execute({"action": "connect"})
print(f"Available models: {result.data['total_models']}")
```

Best for GPT models:
```python
result = aicoding.execute({
    "action": "chat",
    "model": "gpt-5.1-codex",
    "messages": [
        {"role": "user", "content": "Debug this code: ..."}
    ],
    "temperature": 0.3
})
print(result.data['response'])
print(f"Tokens used: {result.data['usage']['total_tokens']}")
```

Best for Claude models (more efficient):
```python
result = aicoding.execute({
    "action": "messages",
    "model": "claude-opus-4-5-20251001",
    "messages": [
        {"role": "user", "content": "Write a design doc for..."}
    ],
    "temperature": 0.7,
    "max_tokens": 4000
})
print(result.data['response'])
```

| Feature | OpenAI Format | Anthropic Format |
|---|---|---|
| Endpoint | /v1/chat/completions | /v1/messages |
| Best For | GPT models | Claude models |
| System Message | In messages array | Separate field |
| Response Key | choices[0].message.content | content[0].text |
| Token Field | usage.total_tokens | usage.input_tokens + output_tokens |
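The table maps directly onto response parsing. A minimal sketch, assuming the raw JSON bodies follow the shapes listed above (the helper names are illustrative):

```python
def extract_text(fmt: str, raw: dict) -> str:
    """Pull the assistant text out of a raw API response.

    fmt is "openai" (/v1/chat/completions) or "anthropic" (/v1/messages),
    per the Response Key column in the table above.
    """
    if fmt == "openai":
        return raw["choices"][0]["message"]["content"]
    return raw["content"][0]["text"]

def total_tokens(fmt: str, raw: dict) -> int:
    """Normalize token usage across the two formats."""
    usage = raw["usage"]
    if fmt == "openai":
        return usage["total_tokens"]
    return usage["input_tokens"] + usage["output_tokens"]
```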
Choose the right format:

- `chat` for GPT models
- `messages` for Claude models (native format, more efficient)

Token management: check `result.data['usage']` after each call to track consumption.

Temperature settings: use lower values (e.g. 0.3) for code and deterministic tasks, higher values (e.g. 0.7) for open-ended generation, as in the examples above.

Error handling:
```python
result = aicoding.execute({...})
if result.status == "success":
    # Process response
    response = result.data['response']
elif result.status == "error":
    # Handle error
    print(f"Error: {result.error}")
```

Use AICoding for alternative Claude access:
```python
orchestrator_config = {
    "primary": {"provider": "aicoding", "model": "claude-opus-4-5-20251001"},
    "reviewer": {"provider": "v98", "model": "gpt-5.1-codex"},
    "consultant": {"provider": "aicoding", "model": "claude-sonnet-4-5-20250929"}
}
```

Document generation workflow:
```python
workflow = [
    {"algorithm": "AICodingConnection", "action": "messages", "model": "claude-sonnet..."},
    {"algorithm": "CodeReviewer"},
    {"algorithm": "DocumentationGenerator"}
]
```

| Issue | Solution |
|---|---|
| 500 Server Error | API temporarily unavailable, retry with backoff |
| Model not available | Check model list with list_models |
| Token limit exceeded | Reduce max_tokens or split request |
| Health check fails | Verify network connectivity |
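For the transient 500 case, retry with backoff might be wrapped as follows. This is a sketch: `execute_with_retry` is not part of the algorithm, and it assumes `result.status` distinguishes success from error as in the error-handling example above:

```python
import time

def execute_with_retry(algorithm, payload, retries=3, base_delay=1.0):
    """Retry transient failures (e.g. 500 Server Error) with exponential backoff."""
    result = None
    for attempt in range(retries):
        result = algorithm.execute(payload)
        if result.status == "success":
            return result
        if attempt < retries - 1:
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
    return result
```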
Use the appropriate model for the task:

```python
tasks = {
    "simple_qa": "claude-haiku-4-5-20251001",        # Cheapest
    "code_review": "gpt-5.1-codex",                  # Best for code
    "complex_reasoning": "claude-opus-4-5-20251001"  # Most capable (1.5x cost)
}
```

Source: D:\Antigravity\Dive AI\core\algorithms\operational\aicoding_connection.py
Version: v1.0 - Initial release with dual-format support
Commit: 20ba150